
    Pointwise consistency of the kriging predictor with known mean and covariance functions

    This paper deals with several issues related to the pointwise consistency of the kriging predictor when the mean and the covariance functions are known. These questions are of general importance in the context of computer experiments. The analysis is based on the properties of approximations in reproducing kernel Hilbert spaces. We fix an erroneous claim of Yakowitz and Szidarovszky (J. Multivariate Analysis, 1985) that the kriging predictor is pointwise consistent for all continuous sample paths under some assumptions.

    Comment: Submitted to mODa9 (the Model-Oriented Data Analysis and Optimum Design Conference), 14th-19th June 2010, Bertinoro, Italy
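
    The predictor in question has a compact closed form: with a known zero mean and covariance function k, the kriging prediction at a new point x is k(x, X) K^{-1} y. A minimal numerical sketch (the squared-exponential covariance, length-scale and data below are illustrative assumptions, not from the paper):

```python
import numpy as np

# Simple kriging with a known zero mean and a known squared-exponential
# covariance (the kernel choice, length-scale and sample data are
# illustrative assumptions; the paper treats general covariance functions).
def kernel(a, b, length=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def krige(x_train, y_train, x_new, jitter=1e-8):
    # predictor: k(x, X) @ K^{-1} y, with a small jitter for stability
    K = kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    return kernel(x_new, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
pred = krige(x, y, np.array([0.5]))   # true value is sin(pi) = 0
```

    Pointwise consistency asks whether such predictions converge to the true sample-path value as the design points fill the domain, which is exactly the regime this toy interpolation illustrates.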

    Translation invariant maps on function spaces over locally compact groups

    We prove that, under adequate geometric requirements, translation invariant mappings between vector-valued quasi-Banach function spaces on a locally compact group G have a bounded extension between Köthe-Bochner spaces L^r(G, E). The class of mappings to which our results apply includes polynomials and multilinear operators. We develop an abstract approach based on some new tools, such as abstract convolution and matching among Banach function lattices, and also on some classical techniques, such as the Maurey-Rosenthal factorization of operators. As a by-product, we show when the Haar measures that appear in certain factorization theorems for nonlinear mappings are in fact Pietsch measures. We also give applications to operators between Köthe-Bochner spaces. (C) 2018 Elsevier Inc. All rights reserved.

    The second named author was supported by the National Science Centre, Poland, project no. 2015/17/B/ST1/00064. The third named author was supported by the Ministerio de Economía, Industria y Competitividad (Spain) and FEDER (project MTM2016-77054-C2-1-P2). We thank the referee for careful reading of the paper and useful remarks.

    Defant, A.; Mastyło, M.; Sánchez Pérez, E.A.; Steinwart, I. (2019). Translation invariant maps on function spaces over locally compact groups. Journal of Mathematical Analysis and Applications, 470(2):795-820. https://doi.org/10.1016/j.jmaa.2018.10.033
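
    For orientation, the central objects can be sketched in standard notation (the precise lattice-geometric hypotheses of the paper are more delicate than this):

```latex
% T is translation invariant when it commutes with all translations:
\[
  T(\tau_g f) \;=\; \tau_g (T f),
  \qquad (\tau_g f)(x) = f(g^{-1}x), \quad g \in G.
\]
% The model example is convolution against a fixed kernel h,
\[
  (Tf)(x) \;=\; (f * h)(x) \;=\; \int_G f(y)\, h(y^{-1}x)\, d\mu(y),
\]
% with \mu a left Haar measure on G.
```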

    Singular Value Decomposition of Operators on Reproducing Kernel Hilbert Spaces

    Reproducing kernel Hilbert spaces (RKHSs) play an important role in many statistics and machine learning applications, ranging from support vector machines to Gaussian processes and kernel embeddings of distributions. Operators acting on such spaces are, for instance, required to embed conditional probability distributions in order to implement the kernel Bayes rule and build sequential data models. It was recently shown that transfer operators such as the Perron-Frobenius or Koopman operator can also be approximated in a similar fashion using covariance and cross-covariance operators, and that eigenfunctions of these operators can be obtained by solving associated matrix eigenvalue problems. The goal of this paper is to provide a solid functional-analytic foundation for the eigenvalue decomposition of RKHS operators and to extend the approach to the singular value decomposition. The results are illustrated with simple guiding examples.
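
    The finite-sample reduction behind this theme is that the nonzero singular values of an empirical cross-covariance operator between two RKHSs can be read off a matrix eigenvalue problem on the Gram matrices. A sketch under illustrative assumptions (Gaussian kernels, synthetic data; the paper's full construction also recovers singular functions):

```python
import numpy as np

# Squared singular values of the empirical cross-covariance operator
# C_yx = (1/n) * sum_i phi(y_i) (x) phi(x_i) equal the nonzero
# eigenvalues of (1/n^2) * Gy @ Gx, where Gx, Gy are Gram matrices.
def gram(z, bw=1.0):
    d = z[:, None] - z[None, :]
    return np.exp(-0.5 * (d / bw) ** 2)

def empirical_singular_values(x, y):
    n = len(x)
    ev = np.linalg.eigvals(gram(y) @ gram(x)).real / n**2
    # clip tiny negative round-off before taking square roots
    return np.sqrt(np.sort(np.clip(ev, 0.0, None))[::-1])

rng = np.random.default_rng(0)
x = rng.normal(size=50)
s = empirical_singular_values(x, np.tanh(x))   # descending spectrum
```

    The eigenvalue problem is non-symmetric as written but is similar to a positive semidefinite one, so the spectrum is real and nonnegative up to round-off.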

    An Overview of the Use of Neural Networks for Data Mining Tasks

    In recent years, the area of data mining has experienced considerable demand for technologies that extract knowledge from large and complex data sources. There is substantial commercial interest, as well as research investigation, in developing new and improved approaches for extracting information, relationships and patterns from datasets. Artificial Neural Networks (NN) are popular biologically inspired intelligent methodologies whose classification, prediction and pattern-recognition capabilities have been utilised successfully in many areas, including science, engineering, medicine, business, banking, telecommunications and many other fields. This paper highlights, from a data mining perspective, the implementation of NN, using supervised and unsupervised learning, for pattern recognition, classification, prediction and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks.
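
    The supervised-classification role of NN surveyed here can be shown in a few lines: one hidden layer trained by backpropagation on XOR (layer sizes, learning rate and iteration count are illustrative choices, not from the paper):

```python
import numpy as np

# Minimal supervised neural-network classifier: one tanh hidden layer,
# sigmoid output, trained by full-batch gradient descent on XOR.
rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])         # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    y = sigmoid(h @ W2 + b2)                   # output probability
    g2 = (y - t) / len(X)                      # cross-entropy gradient
    g1 = (g2 @ W2.T) * (1.0 - h**2)            # backprop through tanh
    W2 -= 0.5 * h.T @ g2; b2 -= 0.5 * g2.sum(0)
    W1 -= 0.5 * X.T @ g1; b1 -= 0.5 * g1.sum(0)

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

    XOR is the classic example of a pattern that no single-layer model can separate, which is precisely why the hidden layer matters.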

    Improving SIEM for critical SCADA water infrastructures using machine learning

    Networked Control Systems (NCS) have been used in many industrial processes. They aim to reduce the human-factor burden and to handle the complex processes and communication of those systems efficiently. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation, etc.). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; robust anomaly detection is therefore a major requirement. However, building an accurate anomaly detection system is not an easy task, owing to the difficulty of differentiating between cyber-attacks and internal system failures (e.g. hardware failures). In this paper, we present a model that detects anomalous events in a water system controlled by SCADA. Six machine learning techniques have been used in building and evaluating the model. The model classifies different anomalous events, including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and spoofing). Unlike other detection systems, our proposed work helps accelerate the mitigation process by notifying the operator with additional information when an anomaly occurs. This additional information includes the probability and confidence level of the event(s) occurring. The model is trained and tested using a real-world dataset.
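
    The operator-notification idea — report not just the predicted anomaly class but also its probability — can be sketched with any probabilistic classifier. Here a multinomial logistic model stands in for the paper's six techniques; the three class names and the synthetic "sensor" features are invented purely for illustration:

```python
import numpy as np

# Toy probabilistic anomaly classifier: three well-separated synthetic
# event clusters, softmax regression trained by gradient descent, and a
# report() helper that returns (predicted class, its probability).
rng = np.random.default_rng(0)
classes = ["normal", "hardware_failure", "dos_attack"]
centers = np.array([[0., 0.], [3., 0.], [0., 3.]])
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in centers])
y = np.repeat(np.arange(3), 100)

W = np.zeros((2, 3)); b = np.zeros(3)
for _ in range(500):                            # plain gradient descent
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = p.copy(); g[np.arange(len(y)), y] -= 1.0; g /= len(y)
    W -= 0.5 * X.T @ g; b -= 0.5 * g.sum(axis=0)

def report(event):
    # what the operator would see: event type plus model probability
    logits = event @ W + b
    p = np.exp(logits - logits.max()); p /= p.sum()
    k = int(p.argmax())
    return classes[k], float(p[k])

label, prob = report(np.array([3.1, 0.2]))
```

    Attaching the probability to the alert is what lets the operator prioritise a confident "sensor failure" differently from an uncertain "possible DoS".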

    A two-step learning approach for solving full and almost full cold start problems in dyadic prediction

    Dyadic prediction methods operate on pairs of objects (dyads), aiming to infer labels for out-of-sample dyads. We consider the full and almost full cold start problem in dyadic prediction, a setting that occurs when neither object in an out-of-sample dyad has been observed during training, or when one of them has been observed only very few times. A popular approach for addressing this problem is to train a model that makes predictions based on a pairwise feature representation of the dyads or, in the case of kernel methods, based on a tensor product pairwise kernel. As an alternative to such a kernel approach, we introduce a novel two-step learning algorithm that borrows ideas from the fields of pairwise learning and spectral filtering. We show theoretically that the two-step method is very closely related to the tensor product kernel approach, and experimentally that it yields slightly better predictive performance. Moreover, unlike existing tensor product kernel methods, the two-step method allows closed-form solutions for training and parameter selection via cross-validation estimates, both in the full and almost full cold start settings, making the approach much more efficient and straightforward to implement.
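
    A compact sketch of the two-step idea: two successive kernel ridge regressions, one over each object domain, giving closed-form dual coefficients that can score a dyad even when both of its objects are unseen. Gaussian kernels, bandwidths and the smooth synthetic label surface are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Two-step kernel ridge regression for dyadic prediction: solve a ridge
# system over the first object set, then over the second, and predict a
# cold-start dyad (u_new, v_new) from the resulting coefficient matrix.
def gram(a, b, bw=0.25):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / bw) ** 2)

def two_step_fit(u, v, Y, lam_u=1e-6, lam_v=1e-6):
    A = np.linalg.solve(gram(u, u) + lam_u * np.eye(len(u)), Y)
    A = np.linalg.solve(gram(v, v) + lam_v * np.eye(len(v)), A.T).T
    return A                                   # dual coefficient matrix

def two_step_predict(A, u, v, u_new, v_new):
    return gram(u_new, u) @ A @ gram(v, v_new)

u = np.linspace(0, 1, 10); v = np.linspace(0, 1, 8)
Y = np.outer(np.sin(u), np.cos(v))             # smooth dyadic labels
A = two_step_fit(u, v, Y)
# a full cold-start dyad: neither 0.55 nor 0.45 appears in training
P = two_step_predict(A, u, v, np.array([0.55]), np.array([0.45]))
```

    Because each step is an ordinary ridge solve, the two regularization parameters can be tuned independently with closed-form cross-validation estimates, which is the efficiency gain claimed over tensor product kernel methods.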

    Theoretical Insights into the Use of Structural Similarity Index in Generative Models and Inferential Autoencoders

    Generative models and inferential autoencoders mostly make use of the ℓ2 norm in their optimization objectives. In order to generate perceptually better images, this short paper theoretically discusses how to use the Structural Similarity Index (SSIM) in generative models and inferential autoencoders. We first review SSIM, SSIM distance metrics and the SSIM kernel. We show that the SSIM kernel is a universal kernel and can thus be used in unconditional and conditional generative moment matching networks. Then, we explain how to use the SSIM distance in variational and adversarial autoencoders and in unconditional and conditional Generative Adversarial Networks (GANs). Finally, we propose to use the SSIM distance rather than the ℓ2 norm in least squares GAN.

    Comment: Accepted (to appear) in the International Conference on Image Analysis and Recognition (ICIAR) 2020, Springer
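
    The index itself is a closed-form statistic over patch means, variances and covariance. A global (single-window) version, with the usual constants for 8-bit intensities; practical SSIM averages this over local windows, which is omitted here for brevity:

```python
import numpy as np

# Global SSIM between two patches, following the standard formula
# SSIM = (2*mx*my + C1)(2*cov + C2) / ((mx^2 + my^2 + C1)(vx + vy + C2)).
def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

a = np.tile(np.arange(8.0), (8, 1)) * 30.0     # simple gradient patch
```

    Identical patches score 1, and the value drops as structure diverges; a distance derived from SSIM (rather than the ℓ2 norm) is then what the paper plugs into the training objectives.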

    MiL Testing of Highly Configurable Continuous Controllers: Scalable Search Using Surrogate Models

    Continuous controllers have been widely used in the automotive domain to monitor and control physical components. These controllers are subject to three rounds of testing: Model-in-the-Loop (MiL), Software-in-the-Loop and Hardware-in-the-Loop. In our earlier work, we used meta-heuristic search to automate MiL testing of fixed configurations of continuous controllers. In this paper, we extend our work to support MiL testing of all feasible configurations of continuous controllers. Specifically, we use a combination of dimensionality reduction and surrogate modeling techniques to scale our earlier MiL testing approach to large, multi-dimensional input spaces formed by configuration parameters. We evaluated our approach by applying it to a complex, industrial continuous controller. Our experiment shows that our approach identifies test cases indicating requirements violations. Further, we demonstrate that dimensionality reduction helps generate surrogate models with higher prediction accuracy. Finally, we show that combining our search algorithm with surrogate modeling improves its efficiency for two out of three requirements.
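
    The surrogate-assisted search loop can be sketched in one dimension: run a handful of (pretend) expensive MiL simulations, fit a cheap surrogate, and query it densely to pick the most critical next test configuration. The quadratic "simulation", the polynomial surrogate and all constants are illustrative stand-ins, not the paper's models:

```python
import numpy as np

# One iteration of surrogate-assisted search over a 1-D configuration
# space: sample expensively, fit a cheap polynomial surrogate by least
# squares, then maximize the surrogate over a dense grid of candidates.
def simulate(c):
    # stand-in for an expensive MiL run returning a criticality score
    return 1.0 - (c - 0.7) ** 2

rng = np.random.default_rng(0)
sampled = rng.uniform(0.0, 1.0, size=12)       # initial expensive runs
scores = simulate(sampled)

features = lambda c: np.stack([np.ones_like(c), c, c**2], axis=-1)
coef, *_ = np.linalg.lstsq(features(sampled), scores, rcond=None)

grid = np.linspace(0.0, 1.0, 1001)             # cheap surrogate queries
candidate = grid[(features(grid) @ coef).argmax()]
```

    In the paper's setting the configuration space is high-dimensional, which is why dimensionality reduction is applied before fitting the surrogate; the dense-grid step above would otherwise be infeasible.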

    Rademacher chaos complexities for learning the kernel problem

    Copyright © 2010 The MIT Press. Copyright © 2010 Massachusetts Institute of Technology. We develop a novel generalization bound for the kernel learning problem. First, we show that the generalization analysis of the kernel learning problem reduces to investigating the suprema of the Rademacher chaos process of order 2 over candidate kernels, which we refer to as the Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity by well-established metric entropy integrals and the pseudo-dimension of the set of candidate kernels. Our new methodology depends mainly on the theory of U-processes and entropy integrals. Finally, we establish satisfactory excess generalization bounds and misclassification error rates for learning Gaussian kernels and general radial basis kernels.
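
    In standard notation, the order-two homogeneous Rademacher chaos over a kernel class that the abstract refers to can be written as follows (normalization conventions vary across papers):

```latex
\[
  \hat{U}_n(\mathcal{K}) \;=\;
  \mathbb{E}_{\varepsilon}\,
  \sup_{k \in \mathcal{K}}
  \Bigl|\, \sum_{i < j} \varepsilon_i \varepsilon_j\, k(x_i, x_j) \Bigr|,
\]
% where \varepsilon_1, \dots, \varepsilon_n are i.i.d. Rademacher signs,
% x_1, \dots, x_n are the training inputs, and \mathcal{K} is the set of
% candidate kernels over which the learner optimizes.
```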